privacy invasion
Embedding Large Language Models into Extended Reality: Opportunities and Challenges for Inclusion, Engagement, and Privacy
Bozkir, Efe, Özdel, Süleyman, Lau, Ka Hei Carrie, Wang, Mengdi, Gao, Hong, Kasneci, Enkelejda
Recent developments in computer graphics, hardware, artificial intelligence (AI), and human-computer interaction are likely to make extended reality (XR) devices and setups more pervasive. While these devices and setups provide users with interactive, engaging, and immersive experiences through different sensing modalities, such as eye and hand trackers, many non-player characters are still driven by pre-scripted behavior or conventional AI techniques. In this paper, we argue for using large language models (LLMs) in XR by embedding them in virtual avatars or as narratives to facilitate more inclusive experiences, through prompt engineering according to user profiles and through fine-tuning the LLMs for particular purposes. We argue that such inclusion will facilitate diversity in XR use. In addition, we believe that with the versatile conversational capabilities of LLMs, users will engage more with XR environments, which may help XR find wider use in everyday life. Lastly, we speculate that combining the information users provide to LLM-powered environments with the biometric data obtained through the sensors might lead to novel privacy invasions. While studying such possible privacy invasions, user privacy concerns and preferences should also be investigated. In summary, despite some challenges, embedding LLMs into XR is a promising and novel research area with several opportunities.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.06)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.68)
New AI Technology can lead to privacy invasion of human minds - Cybersecurity Insiders
Scientists from the University of Texas have developed a new AI model that can scan brains and read minds. It was developed over more than seven years of effort, with the aim of helping to read the minds of people who cannot speak. The technology behind this new form of communication decoding is Functional Magnetic Resonance Imaging (fMRI), which captures, in real time, the arbitrary stimuli a person's brain is grasping or analyzing and renders them as natural language. In simple terms, scientists can scan three parts of the brain and feed that scan data to ML algorithms to analyze the natural language circulating in a person's mind. This can be achieved with the help of electrodes placed on the forehead or the shaved head of a person to read the subject's thoughts.
- North America > United States > Texas (0.31)
- Asia > China (0.21)
- Health & Medicine > Health Care Technology (0.61)
- Health & Medicine > Diagnostic Medicine > Imaging (0.61)
- Information Technology > Security & Privacy (0.53)
- Government > Military > Cyberwarfare (0.40)
The Ethics of Artificial Intelligence in the Workplace – Workforce
Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior. Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better. Whether discussing smart assistants like Apple's Siri or Amazon's Alexa, applications for better customer service or the ability to utilize big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business. In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI as of today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI has increased by 450 percent since 2013. Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.
- North America > United States > New Hampshire (0.05)
- North America > United States > Illinois (0.05)
- North America > United States > Colorado (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Consumer Products & Services (0.71)
Facebook's dating app is finally making privacy invasion sexy
Thank God Facebook is finally offering a dating app. Who better to entrust with the most intimate parts of our lives than Mark Zuckerberg, the king of privacy? I assume Zuck will be building it off of one of the early projects that established him as a wunderkind: FaceMash. You may remember it – it's the one where he hacked into campus websites, collecting pictures that allowed Harvard students to rank each other by hotness. With Facebook dating, the FaceMash dream is at last becoming reality.
PRIVACY INVASION: Defining the Terms of Engagement
More fallout this week from the Edward Snowden leaks of two years ago: the US Second Circuit Court of Appeals ruled May 7th that the NSA's mass phone surveillance is illegal, but did not issue an injunction to stop the Agency's bulk data collection. Meanwhile, many of us knowingly give up privacy online for digital convenience. As the concept of privacy continues to evolve, leading technologists and thinkers debate what privacy means and how much we should want and expect. VIDEO: Clockwise from left:
- Janet Baker, voice-recognition pioneer and co-founder of Dragon Systems
- Rajiv Maheswaran, CEO of Second Spectrum, a data software company that's helping NBA teams up their game
- Dan Ariely, behavioral economist and author of predictably insightful books like Predictably Irrational
- Yves Behar, polymath industrial designer; Founder, fuseproject; Chief Creative Officer, Jawbone; COO, August
- Ali Kashani, Founder and CTO of Neurio, an Internet of Things start-up that is monitoring your house
- Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, who is teaching computers to see
- Rodney Brooks, Founder and CTO of Rethink Robotics and co-founder of iRobot, who is concerned that Google knows every move he makes but is tempted by their tools nonetheless
- Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (0.63)
Instance-Privacy Preserving Crowdsourcing
Kajino, Hiroshi (The University of Tokyo) | Baba, Yukino (National Institute of Informatics) | Kashima, Hisashi (Kyoto University)
Crowdsourcing is a technique for outsourcing tasks to a number of workers. Although crowdsourcing has many advantages, it gives rise to the risk that sensitive information may be leaked, which has limited its adoption. Task instances (the data workers receive in order to process tasks) often contain sensitive information, which can be extracted by workers. For example, in an audio transcription task, an audio file corresponds to an instance, and the content of the audio (e.g., the abstract of a meeting) can be sensitive information. In this paper, we propose a quantitative analysis framework for the instance privacy problem. The proposed framework provides performance measures for instance privacy preserving protocols. As a case study, we apply the proposed framework to an instance clipping protocol and analyze the properties of the protocol. The protocol preserves privacy by clipping instances to limit the amount of information workers obtain. The results show that the protocol can balance task performance and instance privacy preservation. They also show that the proposed measure is consistent with standard measures, which validates the proposed measure.
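The clipping idea in the abstract above can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the `window` and `stride` parameters are invented for the example, and plain text stands in for the paper's audio instances.

```python
def clip_instance(instance, window, stride):
    """Split an instance into short overlapping clips.

    Each clip reveals at most `window` characters, so no single
    worker sees the whole (potentially sensitive) instance.
    Note: if the instance is shorter than the window, the whole
    instance is returned as one clip and no privacy is gained.
    """
    if window >= len(instance):
        return [instance]
    return [instance[start:start + window]
            for start in range(0, len(instance) - window + 1, stride)]

# Example: a sensitive meeting transcript is clipped into segments,
# and each segment would be sent to a different crowd worker.
transcript = "the quarterly meeting covered layoffs and the merger plan"
clips = clip_instance(transcript, window=20, stride=15)
for i, clip in enumerate(clips):
    print(f"worker {i}: {clip!r}")
```

Shorter windows reveal less per worker but make each clip harder to process in context, which is exactly the trade-off between task performance and instance privacy that the abstract describes.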